MON-4029: Add KubeStateMetricsConfig to ClusterMonitoring API #2778
danielmellado wants to merge 1 commit into openshift:master from
Conversation
Pipeline controller notification: for optional jobs, comment. This repository is configured in LGTM mode.
@danielmellado: This pull request references MON-4029 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Hello @danielmellado! Some important instructions when contributing to openshift/api:
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
@everettraven this PR continues #2461; it rebases the original work and addresses your comments. Thanks!
📝 Walkthrough
Added an optional …

Important: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (2 inconclusive)
✅ Passed checks (10 passed)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.4): Error: build linters: unable to load custom analyzer "kubeapilinter": tools/_output/bin/kube-api-linter.so, plugin: not implemented
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`:
- Around line 1117-1234: The CRD currently documents invariants but doesn't
enforce them; update the schema for the PodTopologySpread-like object to:
restrict whenUnsatisfiable to an enum of allowed values (e.g.,
"DoNotSchedule","ScheduleAnyway") on the whenUnsatisfiable property, add
minimum: 1 (and disallow 0) on maxSkew (format int32) and minimum: 1 on
minDomains (or use minimum: 1 when present), and add a validation rule that
requires labelSelector when matchLabelKeys is set (make matchLabelKeys mutually
exclusive with labelSelector and/or add a x-kubernetes-requirements or OpenAPI
dependency that enforces labelSelector presence when matchLabelKeys exists).
Target the properties named whenUnsatisfiable, maxSkew, minDomains,
matchLabelKeys and labelSelector in the CRD fragment to implement these
constraints.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: a91c322b-92fd-43a0-a96f-b2b4da86aa91
⛔ Files ignored due to path filters (3)
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
- openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**
📒 Files selected for processing (5)
- config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
- config/v1alpha1/types_cluster_monitoring.go
- config/v1alpha1/zz_generated.deepcopy.go
- config/v1alpha1/zz_generated.swagger_doc_generated.go
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml
matchLabelKeys:
  description: |-
    MatchLabelKeys is a set of pod label keys to select the pods over which
    spreading will be calculated. The keys are used to lookup values from the
    incoming pod labels, those key-value labels are ANDed with labelSelector
    to select the group of existing pods over which spreading will be calculated
    for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector.
    MatchLabelKeys cannot be set when LabelSelector isn't set.
    Keys that don't exist in the incoming pod labels will
    be ignored. A null or empty list means only match against labelSelector.

    This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default).
  items:
    type: string
  type: array
  x-kubernetes-list-type: atomic
maxSkew:
  description: |-
    MaxSkew describes the degree to which pods may be unevenly distributed.
    When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference
    between the number of matching pods in the target topology and the global minimum.
    The global minimum is the minimum number of matching pods in an eligible domain
    or zero if the number of eligible domains is less than MinDomains.
    For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
    labelSelector spread as 2/2/1:
    In this case, the global minimum is 1.
    | zone1 | zone2 | zone3 |
    |  P P  |  P P  |   P   |
    - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2;
      scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2)
      violate MaxSkew(1).
    - if MaxSkew is 2, incoming pod can be scheduled onto any zone.
    When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence
    to topologies that satisfy it.
    It's a required field. Default value is 1 and 0 is not allowed.
  format: int32
  type: integer
minDomains:
  description: |-
    MinDomains indicates a minimum number of eligible domains.
    When the number of eligible domains with matching topology keys is less than minDomains,
    Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed.
    And when the number of eligible domains with matching topology keys equals or greater than minDomains,
    this value has no effect on scheduling.
    As a result, when the number of eligible domains is less than minDomains,
    scheduler won't schedule more than maxSkew Pods to those domains.
    If value is nil, the constraint behaves as if MinDomains is equal to 1.
    Valid values are integers greater than 0.
    When value is not nil, WhenUnsatisfiable must be DoNotSchedule.

    For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same
    labelSelector spread as 2/2/2:
    | zone1 | zone2 | zone3 |
    |  P P  |  P P  |  P P  |
    The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0.
    In this situation, new pod with the same labelSelector cannot be scheduled,
    because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
    it will violate MaxSkew.
  format: int32
  type: integer
nodeAffinityPolicy:
  description: |-
    NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
    when calculating pod topology spread skew. Options are:
    - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
    - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

    If this value is nil, the behavior is equivalent to the Honor policy.
  type: string
nodeTaintsPolicy:
  description: |-
    NodeTaintsPolicy indicates how we will treat node taints when calculating
    pod topology spread skew. Options are:
    - Honor: nodes without taints, along with tainted nodes for which the incoming pod
      has a toleration, are included.
    - Ignore: node taints are ignored. All nodes are included.

    If this value is nil, the behavior is equivalent to the Ignore policy.
  type: string
topologyKey:
  description: |-
    TopologyKey is the key of node labels. Nodes that have a label with this key
    and identical values are considered to be in the same topology.
    We consider each <key, value> as a "bucket", and try to put balanced number
    of pods into each bucket.
    We define a domain as a particular instance of a topology.
    Also, we define an eligible domain as a domain whose nodes meet the requirements of
    nodeAffinityPolicy and nodeTaintsPolicy.
    e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
    And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
    It's a required field.
  type: string
whenUnsatisfiable:
  description: |-
    WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
    the spread constraint.
    - DoNotSchedule (default) tells the scheduler not to schedule it.
    - ScheduleAnyway tells the scheduler to schedule the pod in any location,
      but giving higher precedence to topologies that would help reduce the
      skew.
    A constraint is considered "Unsatisfiable" for an incoming pod
    if and only if every possible node assignment for that pod would violate
    "MaxSkew" on some topology.
    For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
    labelSelector spread as 3/1/1:
    | zone1 | zone2 | zone3 |
    | P P P |   P   |   P   |
    If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled
    to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies
    MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler
    won't make it *more* imbalanced.
    It's a required field.
  type: string
required:
- maxSkew
- topologyKey
- whenUnsatisfiable
type: object
Enforce documented topology constraint invariants in the schema
The descriptions document hard constraints, but the schema does not enforce several of them (whenUnsatisfiable allowed values, positive maxSkew/minDomains, and matchLabelKeys requiring labelSelector). Invalid CRs can pass admission and then fail later during reconciliation/pod creation.
🔧 Proposed schema validation additions
maxSkew:
description: |-
MaxSkew describes the degree to which pods may be unevenly distributed.
@@
format: int32
+ minimum: 1
type: integer
minDomains:
description: |-
MinDomains indicates a minimum number of eligible domains.
@@
format: int32
+ minimum: 1
type: integer
@@
whenUnsatisfiable:
description: |-
WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
@@
+ enum:
+ - DoNotSchedule
+ - ScheduleAnyway
type: string
+ x-kubernetes-validations:
+ - message: minDomains can only be set when whenUnsatisfiable is DoNotSchedule
+ rule: '!has(self.minDomains) || self.whenUnsatisfiable == ''DoNotSchedule'''
+ - message: matchLabelKeys cannot be set when labelSelector is not set
+ rule: '!has(self.matchLabelKeys) || has(self.labelSelector)'
required:
- maxSkew
- topologyKey
  - whenUnsatisfiable

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`
around lines 1117 - 1234, The CRD currently documents invariants but doesn't
enforce them; update the schema for the PodTopologySpread-like object to:
restrict whenUnsatisfiable to an enum of allowed values (e.g.,
"DoNotSchedule","ScheduleAnyway") on the whenUnsatisfiable property, add
minimum: 1 (and disallow 0) on maxSkew (format int32) and minimum: 1 on
minDomains (or use minimum: 1 when present), and add a validation rule that
requires labelSelector when matchLabelKeys is set (make matchLabelKeys mutually
exclusive with labelSelector and/or add a x-kubernetes-requirements or OpenAPI
dependency that enforces labelSelector presence when matchLabelKeys exists).
Target the properties named whenUnsatisfiable, maxSkew, minDomains,
matchLabelKeys and labelSelector in the CRD fragment to implement these
constraints.
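Worth noting for anyone applying this suggestion: openshift/api generates the payload manifests from the Go API types, so a change like this would normally be expressed as kubebuilder markers rather than hand edits to the CRD YAML. A minimal sketch of what those markers could look like, assuming the topology spread type is declared locally in this repo (all names below are illustrative, not the PR's actual declarations):

package v1alpha1

// Illustrative type only: the PR's real field names and doc comments live in
// config/v1alpha1/types_cluster_monitoring.go. The cross-field CEL rules sit
// on the struct so they can reference sibling fields.
//
// +kubebuilder:validation:XValidation:rule="!has(self.minDomains) || self.whenUnsatisfiable == 'DoNotSchedule'",message="minDomains can only be set when whenUnsatisfiable is DoNotSchedule"
// +kubebuilder:validation:XValidation:rule="!has(self.matchLabelKeys) || has(self.labelSelector)",message="matchLabelKeys cannot be set when labelSelector is not set"
type TopologySpreadConstraintSketch struct {
	// maxSkew must be positive; zero is documented as not allowed.
	// +kubebuilder:validation:Minimum=1
	// +required
	MaxSkew int32 `json:"maxSkew"`

	// minDomains, when set, must be greater than 0.
	// +kubebuilder:validation:Minimum=1
	// +optional
	MinDomains *int32 `json:"minDomains,omitempty"`

	// whenUnsatisfiable only accepts the two values the scheduler understands.
	// +kubebuilder:validation:Enum=DoNotSchedule;ScheduleAnyway
	// +required
	WhenUnsatisfiable string `json:"whenUnsatisfiable"`
}

Whether this is feasible also depends on whether the field embeds upstream corev1.TopologySpreadConstraint, whose schema this repo cannot annotate directly, or a local copy of the type.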
Force-pushed 02919b5 to 361dc49 (compare)
/hold until openshift/cluster-monitoring-operator#2553 merges
/cc @rexagod
Force-pushed 361dc49 to d134762 (compare)
/unhold
Actionable comments posted: 1
♻️ Duplicate comments (1)
payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml (1)
1175-1292: ⚠️ Potential issue | 🟠 Major

Enforce documented topology spread invariants in schema (still missing).
This block documents constraints, but schema-level enforcement is still incomplete (whenUnsatisfiable values, positive maxSkew/minDomains, and matchLabelKeys requiring labelSelector). Invalid CRs can pass admission and fail later.

🔧 Proposed schema validation patch
 maxSkew:
 @@
   format: int32
+  minimum: 1
   type: integer
 minDomains:
 @@
   format: int32
+  minimum: 1
   type: integer
 @@
 whenUnsatisfiable:
 @@
+  enum:
+  - DoNotSchedule
+  - ScheduleAnyway
   type: string
 required:
 - maxSkew
 - topologyKey
 - whenUnsatisfiable
 type: object
+x-kubernetes-validations:
+- message: minDomains can only be set when whenUnsatisfiable is DoNotSchedule
+  rule: '!has(self.minDomains) || self.whenUnsatisfiable == ''DoNotSchedule'''
+- message: matchLabelKeys cannot be set when labelSelector is not set
+  rule: '!has(self.matchLabelKeys) || has(self.labelSelector)'

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml` around lines 1175 - 1292, The CRD currently documents invariants but doesn't enforce them; update the CRD's openAPIV3Schema to (1) restrict whenUnsatisfiable to the allowed enum values (e.g., "DoNotSchedule","ScheduleAnyway"), (2) add minimum: 1 for maxSkew and minDomains (and type integer) so maxSkew cannot be 0/negative and minDomains > 0, (3) add a conditional validation that when minDomains is set then whenUnsatisfiable must equal "DoNotSchedule", and (4) add validations requiring that matchLabelKeys cannot be present unless labelSelector is set and that keys in matchLabelKeys do not overlap with labelSelector keys; implement the conditional/complex checks using CRD CEL (x-kubernetes-validations) or equivalent CRD validation mechanisms referencing the fields matchLabelKeys, labelSelector, maxSkew, minDomains, and whenUnsatisfiable so invalid CRs are rejected at admission time.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`:
- Around line 2145-2158: The test "Should reject KubeStateMetricsConfig with
duplicate resource in additionalResourceLabels" uses an invalid resource value
"jobs" which triggers enum validation before duplicate detection; update the two
entries under spec.kubeStateMetricsConfig.additionalResourceLabels to use a
valid resource (for example "pods" or another accepted enum value) so the YAML
passes resource validation and the duplicate-resource check is exercised by
ClusterMonitoring validation.
---
Duplicate comments:
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`:
- Around line 1175-1292: The CRD currently documents invariants but doesn't
enforce them; update the CRD's openAPIV3Schema to (1) restrict whenUnsatisfiable
to the allowed enum values (e.g., "DoNotSchedule","ScheduleAnyway"), (2) add
minimum: 1 for maxSkew and minDomains (and type integer) so maxSkew cannot be
0/negative and minDomains > 0, (3) add a conditional validation that when
minDomains is set then whenUnsatisfiable must equal "DoNotSchedule", and (4) add
validations requiring that matchLabelKeys cannot be present unless labelSelector
is set and that keys in matchLabelKeys do not overlap with labelSelector keys;
implement the conditional/complex checks using CRD CEL
(x-kubernetes-validations) or equivalent CRD validation mechanisms referencing
the fields matchLabelKeys, labelSelector, maxSkew, minDomains, and
whenUnsatisfiable so invalid CRs are rejected at admission time.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: 5847c428-e700-46b3-84f5-f167574373e9
⛔ Files ignored due to path filters (6)
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
- config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
- config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
- openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
- openapi/openapi.json is excluded by !openapi/**
📒 Files selected for processing (3)
- config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
- config/v1alpha1/types_cluster_monitoring.go
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- config/v1alpha1/types_cluster_monitoring.go
- name: Should reject KubeStateMetricsConfig with duplicate resource in additionalResourceLabels
  initial: |
    apiVersion: config.openshift.io/v1alpha1
    kind: ClusterMonitoring
    spec:
      kubeStateMetricsConfig:
        additionalResourceLabels:
        - resource: jobs
          labels:
          - foo
        - resource: jobs
          labels:
          - bar
  expectedError: "Duplicate value"
Duplicate resource test uses invalid resource values.
This test intends to validate duplicate detection for additionalResourceLabels, but uses jobs which is not a valid resource value. The test may fail for the wrong reason (invalid enum) before reaching the duplicate validation. Use valid resource values to properly test duplicate detection.
🔧 Proposed fix
- name: Should reject KubeStateMetricsConfig with duplicate resource in additionalResourceLabels
initial: |
apiVersion: config.openshift.io/v1alpha1
kind: ClusterMonitoring
spec:
kubeStateMetricsConfig:
additionalResourceLabels:
- - resource: jobs
+ - resource: nodes
labels:
- foo
- - resource: jobs
+ - resource: nodes
labels:
- bar
  expectedError: "Duplicate value"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- name: Should reject KubeStateMetricsConfig with duplicate resource in additionalResourceLabels
  initial: |
    apiVersion: config.openshift.io/v1alpha1
    kind: ClusterMonitoring
    spec:
      kubeStateMetricsConfig:
        additionalResourceLabels:
        - resource: nodes
          labels:
          - foo
        - resource: nodes
          labels:
          - bar
  expectedError: "Duplicate value"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`
around lines 2145 - 2158, The test "Should reject KubeStateMetricsConfig with
duplicate resource in additionalResourceLabels" uses an invalid resource value
"jobs" which triggers enum validation before duplicate detection; update the two
entries under spec.kubeStateMetricsConfig.additionalResourceLabels to use a
valid resource (for example "pods" or another accepted enum value) so the YAML
passes resource validation and the duplicate-resource check is exercised by
ClusterMonitoring validation.
@danielmellado could you please rebase this? Thanks again for the patch! :)
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD, allowing configuration of the kube-state-metrics agent (node selectors, resources, tolerations, topology spread constraints). Includes comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado <dmellado@redhat.com>
Signed-off-by: Daniel Mellado <dmellado@fedoraproject.org>
Force-pushed d134762 to d1719c1 (compare)
Actionable comments posted: 1
🧹 Nitpick comments (2)
config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml (1)
2231-2506: Add explicit negative tests for nodeSelector and tolerations in kubeStateMetricsConfig.

nodeSelector and tolerations are only exercised in the all-fields happy path (Line 2237 and Line 2243). Unlike sibling configs in this file, there are no reject cases for empty/oversized values, so regressions in those validations could slip through.

🧪 Suggested test additions
+ - name: Should reject KubeStateMetricsConfig with empty nodeSelector
+   initial: |
+     apiVersion: config.openshift.io/v1alpha1
+     kind: ClusterMonitoring
+     spec:
+       kubeStateMetricsConfig:
+         nodeSelector: {}
+   expectedError: 'spec.kubeStateMetricsConfig.nodeSelector: Invalid value: 0: spec.kubeStateMetricsConfig.nodeSelector in body should have at least 1 properties'
+
+ - name: Should reject KubeStateMetricsConfig with more than 10 nodeSelector entries
+   initial: |
+     apiVersion: config.openshift.io/v1alpha1
+     kind: ClusterMonitoring
+     spec:
+       kubeStateMetricsConfig:
+         nodeSelector:
+           key1: val1
+           key2: val2
+           key3: val3
+           key4: val4
+           key5: val5
+           key6: val6
+           key7: val7
+           key8: val8
+           key9: val9
+           key10: val10
+           key11: val11
+   expectedError: 'spec.kubeStateMetricsConfig.nodeSelector: Too many: 11: must have at most 10 items'
+
+ - name: Should reject KubeStateMetricsConfig with empty tolerations array
+   initial: |
+     apiVersion: config.openshift.io/v1alpha1
+     kind: ClusterMonitoring
+     spec:
+       kubeStateMetricsConfig:
+         tolerations: []
+   expectedError: 'spec.kubeStateMetricsConfig.tolerations: Invalid value: 0: spec.kubeStateMetricsConfig.tolerations in body should have at least 1 items'
+
+ - name: Should reject KubeStateMetricsConfig with more than 10 tolerations
+   initial: |
+     apiVersion: config.openshift.io/v1alpha1
+     kind: ClusterMonitoring
+     spec:
+       kubeStateMetricsConfig:
+         tolerations:
+         - key: "key1"
+           operator: "Exists"
+         - key: "key2"
+           operator: "Exists"
+         - key: "key3"
+           operator: "Exists"
+         - key: "key4"
+           operator: "Exists"
+         - key: "key5"
+           operator: "Exists"
+         - key: "key6"
+           operator: "Exists"
+         - key: "key7"
+           operator: "Exists"
+         - key: "key8"
+           operator: "Exists"
+         - key: "key9"
+           operator: "Exists"
+         - key: "key10"
+           operator: "Exists"
+         - key: "key11"
+           operator: "Exists"
+   expectedError: 'spec.kubeStateMetricsConfig.tolerations: Too many: 11: must have at most 10 items'

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml` around lines 2231 - 2506, The test suite lacks negative cases for kubeStateMetricsConfig.nodeSelector and kubeStateMetricsConfig.tolerations; add reject tests that mirror existing patterns: (1) a case where kubeStateMetricsConfig.nodeSelector is {} and assert an expectedError about requiring at least one property; (2) a case where kubeStateMetricsConfig.nodeSelector has an empty key or invalid value (if schema enforces non-empty strings); (3) a case where kubeStateMetricsConfig.tolerations is [] and assert an expectedError about requiring at least one item; and (4) a case with too many tolerations (or duplicate tolerations) to assert the "Too many" or "Duplicate value" errors; reference the kubeStateMetricsConfig, nodeSelector, and tolerations keys when adding these new test entries consistent with the surrounding test format.

config/v1alpha1/types_cluster_monitoring.go (1)
2502-2568: Consider extracting the shared scheduling/resources shape.

This block re-declares the same nodeSelector/resources/tolerations/topologySpreadConstraints shape already present in OpenShiftStateMetricsConfig. Pulling the common part into a shared struct would reduce drift in docs and validation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/v1alpha1/types_cluster_monitoring.go` around lines 2502 - 2568, KubeStateMetricsConfig duplicates the scheduling/resources fields from OpenShiftStateMetricsConfig; extract a shared struct (e.g., PodSchedulingAndResources) containing NodeSelector, Resources, Tolerations, and TopologySpreadConstraints and replace those fields in both KubeStateMetricsConfig and OpenShiftStateMetricsConfig with a single embedded/shared field of that new struct; update kubebuilder tags and JSON names on the new struct to preserve validation (Min/MaxItems, listType/listMapKey, optional, etc.) and adjust any references to the original fields in code/comments to use the new struct.
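A minimal sketch of the extraction this nitpick suggests, assuming the min 1/max 10 bounds mentioned in the suggested tests above; the struct name, the choice of corev1 types, and the exact marker values are all hypothetical:

package v1alpha1

import corev1 "k8s.io/api/core/v1"

// PodSchedulingAndResources is a hypothetical shared struct for the
// scheduling/resources fields currently duplicated between
// KubeStateMetricsConfig and OpenShiftStateMetricsConfig.
type PodSchedulingAndResources struct {
	// nodeSelector defines which nodes the component's pods are scheduled on.
	// +kubebuilder:validation:MinProperties=1
	// +kubebuilder:validation:MaxProperties=10
	// +optional
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`

	// resources defines compute resource requests and limits for the pods.
	// +optional
	Resources corev1.ResourceRequirements `json:"resources,omitempty"`

	// tolerations defines the pod tolerations.
	// +kubebuilder:validation:MinItems=1
	// +kubebuilder:validation:MaxItems=10
	// +listType=atomic
	// +optional
	Tolerations []corev1.Toleration `json:"tolerations,omitempty"`

	// topologySpreadConstraints controls how pods spread across topology domains.
	// +listType=map
	// +listMapKey=topologyKey
	// +optional
	TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
}

Both configs could then embed this struct with an inline JSON tag so the generated CRD schema stays identical for each; the risk, as the prompt notes, is flattening away any per-type validation differences.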
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@config/v1alpha1/types_cluster_monitoring.go`:
- Around line 2601-2627: KubeStateMetricsLabelName currently only enforces
length; add CEL validation and tighten the max length to enforce valid
Kubernetes label keys or the wildcard "*". Update the type
KubeStateMetricsLabelName to change the MaxLength from 256 to 253 and add a
+kubebuilder:validation:XValidation CEL rule that allows either self == "*" or a
DNS1123 subdomain prefix + "/" + qualified name (or no prefix) using split('/')
and format.dns1123Subdomain.validate for the prefix and a regex for the name
(^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$), with an appropriate message like
"must be a valid Kubernetes label key or *"; keep the existing MinLength and
ensure the new validation applies to KubeStateMetricsLabelName so Labels
[]KubeStateMetricsLabelName inherits it.
---
Nitpick comments:
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`:
- Around line 2231-2506: The test suite lacks negative cases for
kubeStateMetricsConfig.nodeSelector and kubeStateMetricsConfig.tolerations; add
reject tests that mirror existing patterns: (1) a case where
kubeStateMetricsConfig.nodeSelector is {} and assert an expectedError about
requiring at least one property; (2) a case where
kubeStateMetricsConfig.nodeSelector has an empty key or invalid value (if schema
enforces non-empty strings); (3) a case where kubeStateMetricsConfig.tolerations
is [] and assert an expectedError about requiring at least one item; and (4) a
case with too many tolerations (or duplicate tolerations) to assert the "Too
many" or "Duplicate value" errors; reference the kubeStateMetricsConfig,
nodeSelector, and tolerations keys when adding these new test entries consistent
with the surrounding test format.
In `@config/v1alpha1/types_cluster_monitoring.go`:
- Around line 2502-2568: KubeStateMetricsConfig duplicates the
scheduling/resources fields from OpenShiftStateMetricsConfig; extract a shared
struct (e.g., PodSchedulingAndResources) containing NodeSelector, Resources,
Tolerations, and TopologySpreadConstraints and replace those fields in both
KubeStateMetricsConfig and OpenShiftStateMetricsConfig with a single
embedded/shared field of that new struct; update kubebuilder tags and JSON names
on the new struct to preserve validation (Min/MaxItems, listType/listMapKey,
optional, etc.) and adjust any references to the original fields in
code/comments to use the new struct.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: d4a84268-dde5-4503-a8ce-f4b5c2c51857
⛔ Files ignored due to path filters (6)
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
- config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
- config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
- openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
- openapi/openapi.json is excluded by !openapi/**
📒 Files selected for processing (3)
- config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
- config/v1alpha1/types_cluster_monitoring.go
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml
// KubeStateMetricsLabelName is the name of a Kubernetes label to expose as a metric
// via kube-state-metrics. Use "*" to expose all labels for a resource.
// Must be at least 1 character and at most 256 characters in length.
// +kubebuilder:validation:MinLength=1
// +kubebuilder:validation:MaxLength=256
type KubeStateMetricsLabelName string

// KubeStateMetricsResourceLabels defines which Kubernetes labels to expose as metrics
// for a given resource type in kube-state-metrics.
type KubeStateMetricsResourceLabels struct {
	// resource is the Kubernetes resource name whose labels should be exposed as metrics.
	// Currently, only "jobs" and "cronjobs" are supported due to cardinality concerns.
	// Valid values are "jobs" and "cronjobs".
	// This field is required.
	// +required
	Resource KubeStateMetricsResourceName `json:"resource,omitempty"`

	// labels is the list of Kubernetes label names to expose as metrics for this resource.
	// Use "*" to expose all labels for the specified resource.
	// This field is required.
	// Each label name must be unique within this list.
	// Minimum length for this list is 1.
	// Maximum length for this list is 50.
	// +required
	// +kubebuilder:validation:MinItems=1
	// +kubebuilder:validation:MaxItems=50
	// +listType=set
	Labels []KubeStateMetricsLabelName `json:"labels,omitempty"`
🧩 Analysis chain
🌐 Web query:
For Kubernetes CRD / kubebuilder validation, what is the recommended validation pattern or CEL format helper for a Kubernetes label key (optional DNS prefix + "/" + qualified name), and how should a special wildcard value "*" be handled?
💡 Result:
The recommended validation pattern for a Kubernetes label key in CRD/kubebuilder uses CEL with a custom expression that splits the key on '/' (max 2 parts), validates the optional prefix as DNS1123 subdomain using format.dns1123Subdomain, and the required name part (max 63 chars) against the label name regex ^A-Za-z0-9?$. For the special wildcard "" value, explicitly allow it with self == "" || . In kubebuilder Go struct: // +kubebuilder:validation:MaxLength=253 // +kubebuilder:validation:XValidation:rule=self == "" || (size(parts := self.split("/")) == 1 || (size(parts) == 2 && format.dns1123Subdomain.validate(parts[0]).empty && size(parts[1]) <= 63 && parts[1].matches("^A-Za-z0-9?$"))),message="must be a valid Kubernetes label key or " LabelKey string json:"labelKey,omitempty" Equivalent CRD OpenAPIv3Schema: properties: labelKey: type: string maxLength: 253 x-kubernetes-validations: - rule: 'self == "" || (size(parts := self.split("/")) == 1 || (size(parts) == 2 && !format.dns1123Subdomain.validate(parts[0]).hasValue && size(parts[1]) <= 63 && parts[1].matches("^A-Za-z0-9?$")))' message: must be a valid Kubernetes label key (optional DNS prefix/qualified name, max 253 chars total) or the wildcard "" This uses Kubernetes CEL format library for prefix validation (available since ~1.28), regex for name segment, and explicit wildcard handling. Prefix max 253 chars is enforced by field maxLength (name <=63 checked separately). No built-in "labelKey" format exists; custom CEL is standard practice per controller-tools discussions.
Citations:
- 1: https://kubernetes.io/docs/reference/using-api/cel/
- 2: https://kubernetes.io/docs/reference/using-api/cel
- 3: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
- 4: https://github.com/kubernetes/apimachinery/blob/5a348c53/pkg/api/validate/content/kube.go
- 5: Label key, value validation rules need to be defined with a label selector grammar. kubernetes/kubernetes#1297
- 6: Generated OpenAPI schema for metav1.LabelSelector accepts invalid input kubernetes-sigs/controller-tools#914
🏁 Script executed:
cat -n config/v1alpha1/types_cluster_monitoring.go | sed -n '2595,2630p'

Repository: openshift/api
Length of output: 1936
Add CEL validation for Kubernetes label key format on KubeStateMetricsLabelName.
The type currently only constrains length, so invalid values like "bad label" (contains space) or "☃" (non-ASCII) would be admitted despite documentation describing this as a Kubernetes label name. Add CEL validation to enforce either the wildcard "*" or a valid Kubernetes label key format (optional DNS1123 prefix + "/" + qualified name matching ^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$). Also reduce MaxLength from 256 to 253 to match the Kubernetes DNS name limit. Without this, invalid additionalResourceLabels can be stored and only fail downstream.
Suggested validation:
// +kubebuilder:validation:MaxLength=253
// +kubebuilder:validation:XValidation:rule="self == '*' || (size(parts := self.split('/')) == 1 || (size(parts) == 2 && format.dns1123Subdomain.validate(parts[0]) == '' && size(parts[1]) <= 63 && parts[1].matches('^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$')))",message="must be a valid Kubernetes label key or *"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@config/v1alpha1/types_cluster_monitoring.go` around lines 2601 - 2627,
KubeStateMetricsLabelName currently only enforces length; add CEL validation and
tighten the max length to enforce valid Kubernetes label keys or the wildcard
"*". Update the type KubeStateMetricsLabelName to change the MaxLength from 256
to 253 and add a +kubebuilder:validation:XValidation CEL rule that allows either
self == "*" or a DNS1123 subdomain prefix + "/" + qualified name (or no prefix)
using split('/') and format.dns1123Subdomain.validate for the prefix and a regex
for the name (^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$), with an appropriate
message like "must be a valid Kubernetes label key or *"; keep the existing
MinLength and ensure the new validation applies to KubeStateMetricsLabelName so
Labels []KubeStateMetricsLabelName inherits it.
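A quick local sanity check of the name-segment regex and wildcard branch proposed above (a sketch only: Go's RE2 is close to CEL's matches() semantics, and the DNS-1123 prefix check that the CEL rule delegates to format.dns1123Subdomain is skipped here):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Checks only the wildcard branch and the qualified-name segment of the
// proposed label-key rule; prefix validation is intentionally omitted.
func main() {
	nameRe := regexp.MustCompile(`^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$`)

	valid := func(key string) bool {
		if key == "*" {
			return true
		}
		parts := strings.SplitN(key, "/", 2) // split off an optional prefix, like the CEL rule
		name := parts[len(parts)-1]
		return len(name) <= 63 && nameRe.MatchString(name)
	}

	for _, key := range []string{"*", "app", "app.kubernetes.io/name", "bad label", "-leading", "☃"} {
		fmt.Printf("%q -> %v\n", key, valid(key))
	}
}

Running this prints true for "*", "app" and "app.kubernetes.io/name", and false for the space, leading-dash and non-ASCII cases the review calls out.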
@danielmellado: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
// as metrics by kube-state-metrics. Currently, only "jobs" and "cronjobs" are supported
// due to cardinality concerns.
// Valid values are "jobs" and "cronjobs".
// +kubebuilder:validation:Enum=jobs;cronjobs
(nit) should be Jobs and CronJobs to follow k8s API conventions?
Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.
Signed-off-by: Daniel Mellado <dmellado@redhat.com>
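Putting the thread together, the new API surface is roughly the sketch below. Only the names that actually appear in this review (kubeStateMetricsConfig, additionalResourceLabels, resource, labels, and the four scheduling fields) are grounded; the stub type and all markers are reconstructions, not the merged code:

package v1alpha1

// KubeStateMetricsResourceLabels is shown in the diff above; stubbed here so
// the sketch stands alone.
type KubeStateMetricsResourceLabels struct {
	Resource string   `json:"resource,omitempty"`
	Labels   []string `json:"labels,omitempty"`
}

// KubeStateMetricsConfigSketch reconstructs the new config surface from the
// review comments; the authoritative definition lives in
// config/v1alpha1/types_cluster_monitoring.go in this PR.
type KubeStateMetricsConfigSketch struct {
	// nodeSelector, resources, tolerations and topologySpreadConstraints are
	// omitted here; they mirror the sibling *StateMetricsConfig scheduling
	// fields (see the extraction sketch earlier in this thread).

	// additionalResourceLabels configures which Kubernetes labels
	// kube-state-metrics exposes as metrics per resource. The "Duplicate
	// value" rejection in the tests suggests a map-typed list keyed on
	// resource.
	// +listType=map
	// +listMapKey=resource
	// +optional
	AdditionalResourceLabels []KubeStateMetricsResourceLabels `json:"additionalResourceLabels,omitempty"`
}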